Emulation’s scheduling challenge

By Chris Edwards | Posted: October 28, 2021
Topics/Categories: Blog - EDA, IP

Emulation has become a mainstay of the development of larger IP cores and SoCs, panelists at this week’s DVCon Europe (October 26th, 2021) affirmed. The problem they face today is one of making sure they have enough capacity to support multiple projects.

Though the panel was ostensibly on the must-have tools for the overall verification flow, attention quickly focused on what engineers see as the vital role of hardware acceleration for simulation.

AMD corporate fellow Alex Starr said that though formal verification is improving in terms of scope and capacity, RTL simulation remains the staple of verification. “But the industry is generally struggling to contain its needs within RTL simulation, so we’ve seen growth in the use of emulation,” he said, noting that another form seeing increasing use at AMD is what he called “enterprise prototyping”: a combination of FPGA prototyping and emulation used simultaneously on a design. “It is becoming a must-have tool.”

Daniel Schostak, Arm architect and verification fellow, added, “There has been a step change in what formal verification can cope with, but it’s still not at the point where we can discard simulation or not worry about emulation.”

Hybrid models

Nasr Ullah, senior director of performance architecture at SiFive, said emulation and FPGA prototyping are valuable elements of the company’s verification efforts but added: “One important thing that’s coming out is the issue of scalability.” Even if it is not implemented using cloud EDA, Ullah and others are looking for easier ways to scale capacity to fit the needs of projects as they near deadlines without squeezing out important block-level work in designs that are not as far along.

“The dependency on emulation is a little different to what it was five years ago. It’s now baked into the schedules. But it becomes challenging when you’re approaching tapeout on one design and realize you need more cycles and you have other programs competing. You want to be able to turn the [capacity] knob up to eleven. As an industry we have to figure out how to be able to do that. A cloud or burst model is something the industry clearly needs,” Starr said.

Scale at the high end is largely being pushed by applications such as machine learning. “AI applications are inherently large,” explained Ty Garibay, vice president of engineering at Mythic. “There are no small benchmarks there.”

Test Tetris

Differences in the size of the modules that need emulation or hardware acceleration can lead to scheduling headaches even if capacity is notionally available. “Smaller jobs may be on a batch schedule to get on the system when they can. When a 5 billion-gate design comes on, it scatters the smaller ones,” said Starr. When the big jobs go away, smaller designs can use a larger fraction of the capacity but then there is the risk of blocking the large design if it needs more runtime. “You need to displace the smaller jobs to make way.

“The problem is working out a scheduling system that can organize the different priorities. You can get situations where you are draining the system for the 5 billion gates and find there is capacity not being used, which you don’t like. Do you need to reserve certain systems for big designs? It’s like playing Tetris in your head,” Starr added.
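
Starr’s description is essentially bin packing with preemption. As a rough sketch only (the pool model, job names and numeric priorities below are invented for illustration, not any vendor’s scheduler), the following Python fragment treats the emulator farm as a single pool measured in billions of gates and lets a more urgent job displace strictly less urgent ones, while anything that still fits keeps running:

```python
import heapq
from dataclasses import dataclass, field

@dataclass(order=True)
class Job:
    priority: int                              # lower value = more urgent (tapeout-critical first)
    gates_bn: float = field(compare=False)     # emulation capacity needed, in billions of gates
    name: str = field(compare=False, default="")

class EmulatorPool:
    """Toy shared emulator pool: urgent big jobs may preempt less urgent small ones."""

    def __init__(self, capacity_bn: float) -> None:
        self.capacity_bn = capacity_bn
        self.running: list[Job] = []
        self.waiting: list[Job] = []           # min-heap ordered by priority

    def free_bn(self) -> float:
        return self.capacity_bn - sum(j.gates_bn for j in self.running)

    def submit(self, job: Job) -> None:
        heapq.heappush(self.waiting, job)
        self._schedule()

    def _schedule(self) -> None:
        while self.waiting:
            head = self.waiting[0]             # most urgent waiting job
            # Displace strictly less urgent running jobs until the head fits.
            for victim in sorted(self.running, key=lambda j: j.priority, reverse=True):
                if head.gates_bn <= self.free_bn() or victim.priority <= head.priority:
                    break
                self.running.remove(victim)
                heapq.heappush(self.waiting, victim)   # re-queued for later
            if head.gates_bn <= self.free_bn():
                self.running.append(heapq.heappop(self.waiting))
            else:
                # Capacity freed so far sits idle while the big job waits --
                # the "draining" waste Starr describes.
                break

pool = EmulatorPool(capacity_bn=6.0)
pool.submit(Job(priority=5, gates_bn=1.0, name="block_A_regression"))
pool.submit(Job(priority=5, gates_bn=1.5, name="block_B_regression"))
pool.submit(Job(priority=1, gates_bn=5.0, name="full_chip_tapeout"))
print(sorted(j.name for j in pool.running))    # the full-chip run plus whatever still fits
```

Even this toy policy reproduces the waste Starr describes: capacity drained for the big job can sit idle until that job actually fits, which is why reservations and back-fill enter the discussion.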

Ullah noted, “You will never have enough emulators. But a lot of companies don’t think about scheduling. The people you have to hire to put these schemes in place have a very different skill set. This is infrastructure and infrastructure is becoming very important.”

Though emulation in the cloud has appeared, users are looking to use it for burst access, and the panelists acknowledged it would be a challenge for the EDA vendors to support highly variable workloads. “It comes down to a business-model issue, which is not that different to EDA in the cloud in general,” Garibay said. “How do you build it and justify the investment?”

Block interactions

As well as scaling emulation capacity, issues raised by system-level integration need to be addressed. “Interoperability is important. We want to use RTL simulation, move to emulation and then be able to go back to RTL,” Ullah added, pointing to the visibility differences that different forms of simulation allow. “We also need to go beyond traditional test-case generators and move to things that target the system level more effectively.”

Ullah used an example from a previous employer where problems surfaced in a flight simulator because of an interaction between the software and the hardware. The pilot could put the simulated aircraft into a series of tight turns. After the seventh, the system would crash: the result of too many nested interrupts.
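
As a purely hypothetical sketch of that failure mode (the depth limit, names and mechanism below are invented for illustration and not taken from the system Ullah described), the snippet models firmware that can only save a fixed number of nested interrupt contexts, so the seventh nested interrupt overflows:

```python
# Hypothetical model of unbounded interrupt nesting hitting a fixed limit.
MAX_NESTED_IRQS = 6                 # assumed context-save capacity, illustrative only

class InterruptOverflow(RuntimeError):
    """Raised when a new interrupt arrives with no room left to save context."""

irq_stack: list[str] = []           # contexts saved for interrupts still being serviced

def enter_irq(source: str) -> None:
    if len(irq_stack) >= MAX_NESTED_IRQS:
        raise InterruptOverflow(f"no context slot left for {source}")
    irq_stack.append(source)

# Each tight turn raises a fresh attitude-update interrupt before the previous
# one has been fully serviced, so contexts pile up; running this raises
# InterruptOverflow on the seventh turn.
for turn in range(1, 8):
    enter_irq(f"tight_turn_{turn}")
```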

Schostak said AI applications are changing how the blocks in a design interact with each other, which can prove challenging for verification that might be split across emulators and other forms of hardware acceleration. The long pipelines of some processors that rely extensively on local memory, such as GPUs, make it relatively simple to split off functionality. “If you are scaling other types of CPU where you’ve got a lot more interaction between cores, it’s difficult to think of ways to put in sufficient buffering [to support high-throughput tests].”

The interactions are even more complex for a company like Mythic, which combines analog in-memory computing with logic in its machine-learning arrays. “What’s important to us is the ability to model the complete system and see the interaction of our chip with a variety of hosts,” Garibay said, which means tying logic simulation and emulation together with real-number modelling and similar techniques. “That is still challenging and we would appreciate better ways to do it.”
